Fully supervised salient object detection (SOD) methods have made great progress, but such methods typically rely on large numbers of pixel-level annotations, which are time-consuming and labor-intensive to obtain. In this paper, we focus on a new weakly supervised SOD task under hybrid labels, where the supervision labels consist of a large number of coarse labels generated by traditional unsupervised methods and a small number of real (ground-truth) labels. To address the problems of label noise and quantity imbalance in this task, we design a new pipeline framework with three sophisticated training strategies. In terms of model framework, we decompose the task into a label refinement sub-task and a salient object detection sub-task, which cooperate with each other and are trained alternately. Specifically, the R-Net is designed as a two-stream encoder-decoder model equipped with a blender with guidance and aggregation mechanisms (BGA), which aims to rectify the coarse labels into more reliable pseudo labels, while the S-Net is a replaceable SOD network supervised by the pseudo labels generated by the current R-Net. Note that only the trained S-Net is needed for testing. Moreover, to guarantee the effectiveness and efficiency of network training, we design three training strategies: an alternate iteration mechanism, a group-wise incremental mechanism, and a credibility verification mechanism. Experiments on five SOD benchmarks show that our method achieves competitive performance against weakly supervised and unsupervised methods, both qualitatively and quantitatively.
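As a rough sketch of the alternating R-Net/S-Net scheme described above, assuming hypothetical module and data-loader names (this is not the authors' code):

```python
# Sketch of the alternate-iteration training idea, assuming hypothetical
# r_net (label refinement) and s_net (saliency detection) modules.
import torch

def alternate_training(r_net, s_net, loader_real, loader_coarse,
                       opt_r, opt_s, rounds=3, bce=torch.nn.BCELoss()):
    for _ in range(rounds):
        # Step 1: train R-Net on the small set of real labels so it learns
        # to rectify coarse labels into more reliable pseudo labels.
        for img, coarse, gt in loader_real:
            refined = r_net(img, coarse)
            loss = bce(refined, gt)
            opt_r.zero_grad(); loss.backward(); opt_r.step()
        # Step 2: R-Net rectifies the large pool of coarse labels, and the
        # resulting pseudo labels supervise S-Net; only S-Net is kept for testing.
        for img, coarse in loader_coarse:
            with torch.no_grad():
                pseudo = r_net(img, coarse)
            pred = s_net(img)
            loss = bce(pred, pseudo)
            opt_s.zero_grad(); loss.backward(); opt_s.step()
```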
The spread of COVID-19 has brought a huge disaster to the world, and automatic segmentation of infected regions can help doctors diagnose quickly and reduce their workload. However, accurate and complete segmentation faces several challenges, such as the scattered distribution of infected regions, complex background noise, and blurred segmentation boundaries. To this end, in this paper we propose a novel network for automatic COVID-19 lung infection segmentation from CT images, named BCS-Net, which considers boundary, context, and semantic attributes. BCS-Net follows an encoder-decoder architecture, with most of the design concentrated in the decoder stage, which includes three progressive Boundary-Context-Semantic Reconstruction (BCSR) blocks. In each BCSR block, an Attention-Guided Global Context (AGGC) module is designed to learn the encoder features that are most valuable to the decoder by highlighting important spatial and boundary locations and modeling global context dependencies. In addition, a Semantic Guidance (SG) unit generates a semantic guidance map to refine the decoder features by aggregating multi-scale high-level features at the intermediate resolution. Extensive experiments demonstrate that our proposed framework outperforms existing competitors both qualitatively and quantitatively.
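A minimal sketch of the attention-guided global-context idea behind the AGGC module, with illustrative layer sizes (an assumption, not the paper's exact design):

```python
# Spatial attention selects important positions, and their weighted sum forms
# a global context vector that re-calibrates the encoder features.
import torch
import torch.nn as nn

class GlobalContextAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)  # spatial attention logits
        self.transform = nn.Sequential(
            nn.Conv2d(channels, channels, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1))

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, h, w = x.shape
        w_sp = torch.softmax(self.attn(x).view(b, 1, h * w), dim=-1)  # (B, 1, HW)
        ctx = torch.bmm(x.view(b, c, h * w), w_sp.transpose(1, 2))    # (B, C, 1)
        ctx = self.transform(ctx.view(b, c, 1, 1))
        return x + ctx                                     # broadcast context back

x = torch.randn(2, 64, 32, 32)
print(GlobalContextAttention(64)(x).shape)                 # torch.Size([2, 64, 32, 32])
```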
Owing to its advantage of reducing storage while accelerating query time on large heterogeneous data, cross-modal hashing has been widely studied for approximate nearest neighbor search on multi-modal data. Most hashing methods assume that the training data are class-balanced. In practice, however, real-world data often exhibit a long-tailed distribution. In this paper, we introduce a meta-learning-based cross-modal hashing method (MetaCMH) to handle long-tailed data. Because training samples are scarce in the tail classes, MetaCMH first learns direct features from data in different modalities, and then introduces an associative memory module to learn memory features for samples of the tail classes. It then combines the direct and memory features to obtain a meta feature for each sample. For samples of the head classes of the long-tailed distribution, the weight of the direct feature is larger, because there are enough training data to learn them; for rare classes, the weight of the memory feature is larger. Finally, MetaCMH uses a likelihood loss function to preserve similarity across different modalities and learns hash functions in an end-to-end manner. Experiments on long-tailed datasets show that MetaCMH performs significantly better than state-of-the-art methods, especially on the tail classes.
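A toy sketch of the meta-feature weighting described above; the weighting function is an illustrative assumption, not the paper's exact formulation:

```python
# Combine the direct feature with the memory feature using a weight that
# grows for rare (tail) classes and shrinks for frequent (head) classes.
import numpy as np

def meta_feature(direct_feat, memory_feat, class_count, scale=50.0):
    # Few training samples -> rely more on the associative-memory feature;
    # many samples -> rely more on the directly learned feature.
    w_mem = 1.0 / (1.0 + class_count / scale)  # in (0, 1], larger for tail classes
    return (1.0 - w_mem) * direct_feat + w_mem * memory_feat

head = meta_feature(np.ones(4), np.zeros(4), class_count=5000)  # ~direct feature
tail = meta_feature(np.ones(4), np.zeros(4), class_count=5)     # ~memory feature
print(head[0], tail[0])   # ~0.99 (head keeps direct) vs ~0.09 (tail defers to memory)
```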
Cross-modal hashing (CMH) is one of the most promising approaches for cross-modal approximate nearest neighbor search. Most CMH solutions ideally assume that the label sets of the training and testing data are identical. However, this assumption is often violated, causing the zero-shot CMH problem. Recent efforts to address this problem focus on transferring knowledge to the unseen classes via label attributes. However, these attributes are isolated from the features of the multi-modal data. To reduce this information gap, we introduce a method named LAEH (Label Attribute Embedding for zero-shot cross-modal Hashing). LAEH first obtains the initial semantic attribute vectors of the labels via a word2vec model and then uses a transformation network to map them into a common subspace. Next, it leverages the hash vectors and the feature similarity matrix to guide the feature extraction networks of the different modalities. At the same time, LAEH uses attribute similarity as a supplement to label similarity to rectify the label embedding and the common subspace. Experiments show that LAEH outperforms related representative zero-shot and cross-modal hashing methods.
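A minimal sketch of the label-attribute embedding step, mocking the word2vec lookup with random vectors and using illustrative layer sizes (both assumptions, for self-containedness):

```python
# Map word-vector attributes of the labels into a common subspace with a small
# transformation network, then compute the attribute similarity matrix that
# supplements label similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F

labels = ["dog", "cat", "car"]
word_vecs = torch.randn(len(labels), 300)         # stand-in for word2vec vectors

transform = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 64))
attrs = F.normalize(transform(word_vecs), dim=1)  # attribute vectors in common subspace
attr_sim = attrs @ attrs.t()                      # attribute similarity matrix
print(attr_sim.shape)                             # torch.Size([3, 3])
```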
Salient object detection (SOD) aims to determine the most visually attractive objects in an image. With the development of virtual reality technology, 360{\deg} omnidirectional image has been widely used, but the SOD task in 360{\deg} omnidirectional image is seldom studied due to its severe distortions and complex scenes. In this paper, we propose a Multi-Projection Fusion and Refinement Network (MPFR-Net) to detect the salient objects in 360{\deg} omnidirectional image. Different from the existing methods, the equirectangular projection image and four corresponding cube-unfolding images are embedded into the network simultaneously as inputs, where the cube-unfolding images not only provide supplementary information for equirectangular projection image, but also ensure the object integrity of the cube-map projection. In order to make full use of these two projection modes, a Dynamic Weighting Fusion (DWF) module is designed to adaptively integrate the features of different projections in a complementary and dynamic manner from the perspective of inter and intra features. Furthermore, in order to fully explore the interaction between encoder and decoder features, a Filtration and Refinement (FR) module is designed to suppress the redundant information within the feature itself and between features. Experimental results on two omnidirectional datasets demonstrate that the proposed approach outperforms the state-of-the-art methods both qualitatively and quantitatively.
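A simplified sketch of dynamic weighted fusion across the two projection branches; the gating design is an illustrative assumption, not the exact DWF module:

```python
# A gating head predicts per-branch weights from globally pooled features,
# and the projection branches are summed accordingly.
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    def __init__(self, channels, n_branches=2):
        super().__init__()
        self.gate = nn.Linear(channels * n_branches, n_branches)

    def forward(self, feats):                      # feats: list of (B, C, H, W)
        pooled = torch.cat([f.mean(dim=(2, 3)) for f in feats], dim=1)
        w = torch.softmax(self.gate(pooled), dim=1)            # (B, n_branches)
        return sum(w[:, i, None, None, None] * f
                   for i, f in enumerate(feats))

equi = torch.randn(2, 64, 32, 32)   # equirectangular-projection features
cube = torch.randn(2, 64, 32, 32)   # cube-unfolding features
print(DynamicFusion(64)([equi, cube]).shape)       # torch.Size([2, 64, 32, 32])
```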
Convolutional Neural Network (CNN)-based image super-resolution (SR) has exhibited impressive success on known degraded low-resolution (LR) images. However, this type of approach struggles to maintain its performance in practical scenarios where the degradation process is unknown. Although existing blind SR methods attempt to solve this problem using blur kernel estimation, their perceptual quality and reconstruction accuracy are still unsatisfactory. In this paper, we analyze the degradation of a high-resolution (HR) image in terms of image intrinsic components according to a degradation-based formulation model. We propose a components decomposition and co-optimization network (CDCN) for blind SR. Firstly, CDCN decomposes the input LR image into structure and detail components in feature space. Then, a mutual collaboration block (MCB) is presented to exploit the relationship between the two components. In this way, the detail component can provide informative features to enrich the structural context, and the structure component can carry structural context for better detail revealing, in a mutually complementary manner. After that, we present a degradation-driven learning strategy to jointly supervise the HR image detail and structure restoration process. Finally, a multi-scale fusion module followed by an upsampling layer is designed to fuse the structure and detail features and perform SR reconstruction. Empowered by such degradation-based components decomposition, collaboration, and mutual optimization, we can bridge the correlation between component learning and degradation modelling for blind SR, thereby producing SR results with more accurate textures. Extensive experiments on both synthetic SR datasets and real-world images show that the proposed method achieves state-of-the-art performance compared to existing methods.
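A minimal sketch of a structure/detail decomposition in feature space, using a fixed low-pass filter instead of the paper's learned decomposition (an assumption for illustration):

```python
# A smooth (low-frequency) branch approximates structure, and the residual
# carries detail; the two components sum back to the original feature.
import torch
import torch.nn.functional as F

def decompose(feat, k=5):
    structure = F.avg_pool2d(feat, kernel_size=k, stride=1, padding=k // 2)
    detail = feat - structure                      # high-frequency residual
    return structure, detail

feat = torch.randn(1, 64, 48, 48)
s, d = decompose(feat)
print(torch.allclose(s + d, feat))                 # True: components sum back
```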
Existing convolutional neural network (CNN)-based image super-resolution (SR) methods have achieved impressive performance on the bicubic kernel, which is not valid for handling unknown degradations in real-world applications. Recent blind SR methods suggest reconstructing SR images by relying on blur kernel estimation. However, their results still exhibit visible artifacts and detail distortion due to estimation errors. To alleviate these problems, in this paper we propose an effective and kernel-free network, namely DSSR, which enables recurrent detail-structure alternative optimization without incorporating a blur kernel prior for blind SR. Specifically, in our DSSR, a detail-structure modulation module (DSMM) is built to exploit the interaction and collaboration of image details and structures. The DSMM consists of two components: a detail restoration unit (DRU) and a structure modulation unit (SMU). The former aims at regressing the intermediate HR detail reconstruction from LR structural contexts, and the latter performs structural context modulation conditioned on the learned detail maps in both HR and LR spaces. Besides, we use the output of DSMM as the hidden state and design our DSSR architecture from a recurrent convolutional neural network (RCNN) view. In this way, the network can alternately optimize the image details and structural contexts, achieving co-optimization across time. Moreover, equipped with the recurrent connection, our DSSR allows low- and high-level feature representations to complement each other by observing previous HR details and contexts at every unrolling time step. Extensive experiments on synthetic datasets and real-world images demonstrate that our method achieves state-of-the-art performance against existing methods. The source code can be found at https://github.com/Arcananana/DSSR.
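A rough sketch of the recurrent unrolling view, with a toy convolution standing in for the DSMM cell (hypothetical, for illustration only):

```python
# The cell output serves as the hidden state, so details and structures are
# alternately refined over unrolling steps.
import torch
import torch.nn as nn

class RecurrentRefiner(nn.Module):
    def __init__(self, cell, steps=4):
        super().__init__()
        self.cell, self.steps = cell, steps

    def forward(self, lr_feat):
        hidden = torch.zeros_like(lr_feat)
        for _ in range(self.steps):                # unroll over time
            hidden = self.cell(torch.cat([lr_feat, hidden], dim=1))
        return hidden

cell = nn.Conv2d(128, 64, 3, padding=1)            # toy stand-in for DSMM
out = RecurrentRefiner(cell)(torch.randn(1, 64, 24, 24))
print(out.shape)                                   # torch.Size([1, 64, 24, 24])
```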
Although vaccines have been developed and national vaccination rates are rising steadily, coronavirus disease 2019 (COVID-19) still has a negative impact on healthcare systems around the world. At the current stage, automatically segmenting lung infection regions from CT images is essential for the diagnosis and treatment of COVID-19. Thanks to the development of deep learning technology, some deep learning solutions for lung infection segmentation have been proposed. However, due to the scattered distribution of infections, complex background interference, and blurred boundaries, the accuracy and completeness of existing models remain unsatisfactory. To this end, we propose a boundary-guided semantic learning network (BSNet) in this paper. On the one hand, a dual-branch semantic enhancement module that combines top-level semantic preservation and progressive semantic integration is designed to model the complementary relationship between different high-level features, thereby promoting more complete segmentation results. On the other hand, a mirror-symmetric boundary guidance module is proposed to accurately detect the boundaries of lesion regions in a mirror-symmetric manner. Experiments on publicly available datasets demonstrate that our BSNet outperforms existing state-of-the-art competitors and achieves a real-time inference speed of 44 fps.
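A generic boundary-guidance sketch, not the paper's mirror-symmetric design: a boundary map derived from a mask by a morphological gradient re-weights features near lesion borders:

```python
# Dilation minus erosion yields a thin band around mask boundaries, which is
# then used to emphasize boundary positions in the features.
import torch
import torch.nn.functional as F

def boundary_map(mask, k=3):                       # mask: (B, 1, H, W) in [0, 1]
    dilated = F.max_pool2d(mask, k, stride=1, padding=k // 2)
    eroded = -F.max_pool2d(-mask, k, stride=1, padding=k // 2)
    return dilated - eroded                        # ~1 on boundaries, 0 elsewhere

def boundary_guided(feat, mask):
    return feat * (1.0 + boundary_map(mask))       # emphasize boundary positions

mask = torch.zeros(1, 1, 8, 8); mask[..., 2:6, 2:6] = 1.0
print(boundary_map(mask).sum() > 0)                # tensor(True)
```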
Stereo superpixel segmentation aims to group discrete pixels into perceptual regions using left and right views in a more collaborative and efficient way. Existing superpixel segmentation algorithms mostly use color and spatial features as input, which may impose strong constraints on the spatial information when the disparity information of stereo image pairs is also exploited. To alleviate this problem, we propose a stereo superpixel segmentation method with a decoupling mechanism for spatial information in this work. To decouple the stereo disparity information from the spatial information, the spatial information is temporarily removed before fusing the features of the stereo image pair, and a Decoupled Stereo Fusion Module (DSFM) is proposed to handle the alignment of stereo features as well as occlusion problems. Moreover, since spatial information is essential for superpixel segmentation, we further design a Dynamic Spatial Embedding Module (DSEM) to re-add the spatial information, and the weights of the spatial information are adaptively adjusted by a Dynamic Fusion (DF) mechanism in the DSEM to achieve finer segmentation. Comprehensive experimental results demonstrate that our method achieves state-of-the-art performance on the KITTI2015 and Cityscapes datasets, and its effectiveness is also verified on salient object detection on the NJU2K dataset. The source code will be made publicly available upon acceptance of the paper.
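A toy sketch of the decoupling idea: fuse left/right appearance features without coordinate channels, then re-add a spatial embedding with a learned weight (the channel layout and the embedding itself are illustrative assumptions):

```python
# Appearance-only stereo fusion followed by weighted re-injection of spatial
# (coordinate) information, echoing the DSFM/DSEM split at a toy scale.
import torch
import torch.nn as nn

class DecoupledFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.Conv2d(channels * 2, channels, 1)
        self.spatial_weight = nn.Parameter(torch.tensor(0.5))

    def forward(self, left, right, xy):            # xy: (B, 2, H, W) coordinates
        fused = self.fuse(torch.cat([left, right], dim=1))  # appearance-only fusion
        coord = xy.mean(dim=1, keepdim=True)                # toy spatial embedding
        return fused + self.spatial_weight * coord          # re-add spatial info

left, right = torch.randn(1, 32, 16, 16), torch.randn(1, 32, 16, 16)
ys, xs = torch.meshgrid(torch.linspace(0, 1, 16), torch.linspace(0, 1, 16),
                        indexing="ij")
xy = torch.stack([xs, ys]).unsqueeze(0)            # (1, 2, 16, 16)
print(DecoupledFusion(32)(left, right, xy).shape)  # torch.Size([1, 32, 16, 16])
```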
The paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network. Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image. The curve estimation is specially designed, considering pixel value range, monotonicity, and differentiability. Zero-DCE is appealing in its relaxed assumption on reference images, i.e., it does not require any paired or unpaired data during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and drive the learning of the network. Our method is efficient as image enhancement can be achieved by an intuitive and simple nonlinear curve mapping. Despite its simplicity, we show that it generalizes well to diverse lighting conditions. Extensive experiments on various benchmarks demonstrate the advantages of our method over state-of-the-art methods qualitatively and quantitatively. Furthermore, the potential benefits of our Zero-DCE to face detection in the dark are discussed.
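For concreteness, the quadratic curve family from the Zero-DCE paper, LE(x) = x + alpha*x*(1-x), applied iteratively with per-pixel parameter maps (here given as constants rather than predicted by DCE-Net):

```python
# Iterative application of the pixel-wise quadratic curve; with alpha in
# [-1, 1] and inputs in [0, 1], the output is monotonic and stays in [0, 1].
import numpy as np

def enhance(image, alpha_maps):
    # image: HxWx3 in [0, 1]; alpha_maps: one per-pixel parameter map per
    # iteration (in Zero-DCE these come from DCE-Net; here they are given).
    x = image
    for alpha in alpha_maps:
        x = x + alpha * x * (1.0 - x)   # LE(x) = x + alpha * x * (1 - x)
    return x

img = np.random.rand(4, 4, 3)
out = enhance(img, [np.full((4, 4, 3), 0.8)] * 8)  # 8 iterations brighten the image
print(out.min() >= 0 and out.max() <= 1)           # True
```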